Interviewing Finbarr Timbers on the "We are So Back" Era of Reinforcement Learning

Update: 2024-12-05
Description

Finbarr Timbers is an AI researcher who writes Artificial Fintelligence — one of the technical AI blogs I’ve been recommending for a long time — and has worked at top AI labs including DeepMind and Midjourney. The goal of this interview was to do a few things:

* Revisit what reinforcement learning (RL) actually is, its origins, and its motivations.

* Contextualize the major breakthroughs of deep RL in the last decade, from DQN for Atari to AlphaZero to ChatGPT. How could we have seen the resurgence coming? (see the timeline below for the major events we cover)

* Modern uses of RL: o1, RLHF, and the future of finetuning all ML models.

* Address some of the critiques, like “RL doesn’t work yet.”

It was a fun one. Listen on Apple Podcasts, Spotify, YouTube, and wherever you get your podcasts. For other Interconnects interviews, go here.

Timeline of RL and what was happening at the time

In the last decade of deep RL, there have been a few phases.

* Era 1: Deep RL fundamentals — when modern algorithms were designed and proven.

* Era 2: Major projects — AlphaZero, OpenAI 5, and all the projects that put RL on the map.

* Era 3: Slowdown — when DeepMind and OpenAI no longer had the major RL projects and cultural relevance declined.

* Era 4: RLHF & widening success — RL’s new life post ChatGPT.

Covering these eras are the following events. The list is incomplete, but enough to inspire a conversation.

Early era: TD Gammon, REINFORCE, Etc

2013: Deep Q Learning (Atari)

2014: Google acquires DeepMind

2016: AlphaGo defeats Lee Sedol

2017: PPO paper, AlphaZero (no human data)

2018: OpenAI Five

2019: GPT-2, AlphaStar, early papers on robotic sim2real with RL (see blog post)

2020: MuZero

2021: Decision Transformer

2022: ChatGPT, sim2real continues.

2023: Scaling laws for RL (blog post), doubts about RL

2024: o1, post-training, RL’s bloom

Interconnects is a reader-supported publication. Consider becoming a subscriber.

Chapters

* [00:00:00] Introduction

* [00:02:14] Reinforcement Learning Fundamentals

* [00:09:03] The Bitter Lesson

* [00:12:07] Reward Modeling and Its Challenges in RL

* [00:16:03] Historical Milestones in Deep RL

* [00:21:18] OpenAI Five and Challenges in Complex RL Environments

* [00:25:24] Recent-ish Developments in RL: MuZero, Decision Transformer, and RLHF

* [00:30:29] OpenAI's o1 and Exploration in Language Models

* [00:40:00] Tülu 3 and Challenges in RL Training for Language Models

* [00:46:48] Comparing Different AI Assistants

* [00:49:44] Management in AI Research

* [00:55:30] Building Effective AI Teams

* [01:01:55] The Need for Personal Branding

We mention

* o1 (OpenAI model)

* Rich Sutton

* University of Alberta

* London School of Economics

* IBM’s Deep Blue

* Alberta Machine Intelligence Institute (AMII)

* John Schulman

* Claude (Anthropic's AI assistant)

* Logan Kilpatrick

* Bard (Google's AI assistant)

* DeepSeek R1 Lite

* Scale AI

* OLMo (AI2's language model)

* Golden Gate Claude



Get full access to Interconnects at www.interconnects.ai/subscribe
Nathan Lambert and Finbarr Timbers